3,281 research outputs found

    Preventing and Reversing Vacuum-Induced Optical Losses in High-Finesse Tantalum (V) Oxide Mirror Coatings

    We study the vacuum-induced degradation of high-finesse optical cavities with mirror coatings composed of SiO₂–Ta₂O₅ dielectric stacks, and present methods to protect these coatings and to recover their initial quality factor. For separate coatings with reflectivities centered at 370 nm and 422 nm, a vacuum-induced continuous increase in optical loss occurs if the surface layer of the coating is made of Ta₂O₅, while it does not occur if it is made of SiO₂. The incurred optical loss can be reversed by filling the vacuum chamber with oxygen at atmospheric pressure, and the recovery rate can be strongly accelerated by continuous laser illumination at 422 nm. Both the degradation and the recovery processes depend strongly on temperature. We find that a 1 nm-thick layer of SiO₂ passivating the Ta₂O₅ surface layer is sufficient to reduce the degradation rate by more than a factor of 10, strongly supporting surface oxygen depletion as the primary degradation mechanism.
    Comment: 14 pages, 7 figures
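    The sensitivity of such coatings to nanometer-scale surface chemistry follows from the standard relation between cavity finesse and round-trip loss; this relation is textbook background, not taken from the abstract:

    \begin{equation}
      \mathcal{F} \simeq \frac{2\pi}{T_1 + T_2 + L_1 + L_2},
    \end{equation}

    where $T_i$ are the mirror transmissions and $L_i$ the per-mirror absorption and scatter losses. For a finesse of order $10^5$, the total round-trip loss sits at the $10^{-5}$ level, so even a few parts per million of added surface absorption measurably degrades the cavity.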

    Rerandomization and Regression Adjustment

    Randomization is a basis for the statistical inference of treatment effects without strong assumptions on the outcome-generating process. Appropriately using covariates further yields more precise estimators in randomized experiments. R. A. Fisher suggested blocking on discrete covariates in the design stage or conducting analysis of covariance (ANCOVA) in the analysis stage. We can embed blocking into a wider class of experimental design called rerandomization, and extend the classical ANCOVA to more general regression adjustment. Rerandomization trumps complete randomization in the design stage, and regression adjustment trumps the simple difference-in-means estimator in the analysis stage. It is then intuitive to use both rerandomization and regression adjustment. Under the randomization-inference framework, we establish a unified theory allowing the designer and analyzer to have access to different sets of covariates. We find that asymptotically (a) for any given estimator with or without regression adjustment, rerandomization never hurts either the sampling precision or the estimated precision, and (b) for any given design with or without rerandomization, our regression-adjusted estimator never hurts the estimated precision. Therefore, combining rerandomization and regression adjustment yields better coverage properties and thus improves statistical inference. To theoretically quantify these statements, we discuss optimal regression-adjusted estimators in terms of the sampling precision and the estimated precision, and then measure the additional gains of the designer and the analyzer. We finally suggest using rerandomization in the design and regression adjustment in the analysis followed by the Huber–White robust standard error.
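    The combination the abstract recommends, rerandomization in the design stage followed by regression adjustment in the analysis stage, can be illustrated with a short simulation. The sketch below is a minimal illustration, not the authors' code: the simulated data, the Mahalanobis acceptance threshold, and the Lin-style interacted regression are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 3
    X = rng.normal(size=(n, p))                    # covariates
    Y0 = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)  # control potential outcomes
    Y1 = Y0 + 2.0                                  # constant treatment effect of 2

    def mahalanobis(X, z):
        """Mahalanobis distance between the covariate means of the two arms."""
        diff = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
        n1, n0 = (z == 1).sum(), (z == 0).sum()
        cov = np.cov(X, rowvar=False) * (1 / n1 + 1 / n0)
        return diff @ np.linalg.solve(cov, diff)

    def rerandomize(X, n1, threshold=1.0):
        """Redraw the assignment until covariate balance passes the threshold."""
        while True:
            z = np.zeros(len(X), dtype=int)
            z[rng.choice(len(X), size=n1, replace=False)] = 1
            if mahalanobis(X, z) <= threshold:
                return z

    z = rerandomize(X, n1=n // 2)
    Y = np.where(z == 1, Y1, Y0)                   # observed outcomes

    # Regression adjustment: regress Y on treatment, centered covariates,
    # and their interaction; the coefficient on z estimates the average
    # treatment effect (Lin-style adjustment, assumed here for illustration).
    Xc = X - X.mean(axis=0)
    D = np.column_stack([np.ones(n), z, Xc, z[:, None] * Xc])
    beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    print("difference in means:         ", Y[z == 1].mean() - Y[z == 0].mean())
    print("regression-adjusted estimate:", beta[1])

    In practice one would also report the Huber–White robust standard error for the treatment coefficient, as the abstract suggests, e.g. via statsmodels' OLS fit with cov_type="HC2".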

    Phantom dark energy with varying-mass dark matter particles: acceleration and cosmic coincidence problem

    Full text link
    We investigate several varying-mass dark-matter particle models in the framework of phantom cosmology. We examine whether there exist late-time cosmological solutions, corresponding to an accelerating universe and possessing dark energy and dark matter densities of the same order. Imposing exponential or power-law potentials and exponential or power-law mass dependence, we conclude that the coincidence problem cannot be solved or even alleviated. Thus, if dark energy is attributed to the phantom paradigm, varying-mass dark matter models cannot fulfill the basic requirement that led to their construction.
    Comment: 11 pages, 5 figures
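    A worked statement of the setup helps make the claim precise. In a commonly used formulation of varying-mass dark matter coupled to a phantom scalar field $\phi$, assumed here since the abstract does not display its equations, the energy exchange appears as source terms in the continuity equations:

    \begin{align}
      \dot{\rho}_{\rm DM} + 3H\rho_{\rm DM}
        &= \frac{1}{M(\phi)}\frac{\mathrm{d}M}{\mathrm{d}\phi}\,\dot{\phi}\,\rho_{\rm DM}, \\
      \dot{\rho}_{\phi} + 3H\left(\rho_{\phi} + p_{\phi}\right)
        &= -\frac{1}{M(\phi)}\frac{\mathrm{d}M}{\mathrm{d}\phi}\,\dot{\phi}\,\rho_{\rm DM},
    \end{align}

    with the phantom field's negative-kinetic-energy density and pressure $\rho_{\phi} = -\dot{\phi}^{2}/2 + V(\phi)$ and $p_{\phi} = -\dot{\phi}^{2}/2 - V(\phi)$. The exponential and power-law choices the abstract mentions then correspond to, e.g., $V(\phi) = V_0 e^{-\lambda\phi}$ or $V(\phi) = V_0\,\phi^{-n}$, and $M(\phi) = M_0 e^{\mu\phi}$ or $M(\phi) = M_0\,\phi^{m}$; solving the coincidence problem would require a late-time accelerating attractor with $\rho_{\rm DM}/\rho_{\phi} = \mathcal{O}(1)$, which the paper reports does not arise.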